A framework for improved training of Sigma-Pi networks

Authors

  • Malcolm I. Heywood
  • Peter Noakes
Abstract

This paper proposes and demonstrates a framework for Sigma-Pi networks in which the combinatorial increase in product terms is avoided. This is achieved by implementing only a subset of the possible product terms (the sub-net Sigma-Pi). Application of a dynamic weight-pruning algorithm enables redundant weights to be removed and replaced during the learning process, hence permitting access to a larger weight space than employed at network initialization. More than one learning rate is applied to ensure that the inclusion of higher-order descriptors does not result in over-description of the training set (memorization). The framework is tested on a problem requiring significant generalization ability, and the performance of the resulting sub-net Sigma-Pi network is compared with that of optimal multi-layer perceptrons and general Sigma-Pi solutions.
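The central object of the abstract, a Sigma-Pi unit, forms a weighted sum over products of selected inputs. A minimal sketch follows; the function name, the sigmoid activation, and the particular subset of index tuples are illustrative assumptions, not details from the paper. Restricting `terms` to a subset of all possible index tuples is what the abstract calls a "sub-net" Sigma-Pi.

```python
import numpy as np

def sigma_pi_unit(x, terms, weights, bias=0.0):
    """Illustrative Sigma-Pi unit: a weighted sum of products of inputs.

    x       -- input vector (NumPy array)
    terms   -- list of index tuples; each tuple names the inputs whose
               product forms one (possibly higher-order) term
    weights -- one weight per product term
    """
    s = bias
    for w, idx in zip(weights, terms):
        s += w * np.prod(x[list(idx)])       # product term, then weight
    return 1.0 / (1.0 + np.exp(-s))          # logistic squashing (assumed)

# Example: two first-order terms plus one second-order term --
# a small subset of the 2**n possible product terms over the inputs.
y = sigma_pi_unit(np.array([1.0, 1.0]),
                  terms=[(0,), (1,), (0, 1)],
                  weights=[1.0, 1.0, 1.0])
```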


Related references

The Application of Multi-Layer Artificial Neural Networks in Speckle Reduction (Methodology)

Optical Coherence Tomography (OCT) uses the spatial and temporal coherence properties of optical waves backscattered from a tissue sample to form an image. An inherent characteristic of coherent imaging is the presence of speckle noise. In this study we use a new ensemble framework which is a combination of several Multi-Layer Perceptron (MLP) neural networks to denoise OCT images. The noise is...


Integer Weight Higher-Order Neural Network Training Using Distributed Differential Evolution

We study the class of Higher-Order Neural Networks and especially the Pi-Sigma Networks. The performance of Pi-Sigma Networks is evaluated through several well known neural network training benchmarks. In the experiments reported here, Distributed Evolutionary Algorithms for Pi-Sigma networks training are presented. More specifically, the distributed version of the Differential Evolution algorit...


Hardware-friendly Higher-Order Neural Network Training using Distributed Evolutionary Algorithms

In this paper, we study the class of Higher-Order Neural Networks and especially the Pi-Sigma Networks. The performance of Pi-Sigma Networks is evaluated through several well known Neural Network Training benchmarks. In the experiments reported here, Distributed Evolutionary Algorithms are implemented for Pi-Sigma neural networks training. More specifically, the distributed versions of the Diffe...


Evolutionary Algorithm Training of Higher Order Neural Networks

This chapter aims to further explore the capabilities of the Higher Order Neural Networks class and especially the Pi-Sigma Neural Networks. The performance of Pi-Sigma Networks is evaluated through several well known neural network training benchmarks. In the experiments reported here, Distributed Evolutionary Algorithms are implemented for Pi-Sigma neural networks training. More specifically,...


Higher-Order Neural Networks Training Using Differential Evolution

In this contribution, we study the class of Higher-Order Neural Networks, especially Pi-Sigma Networks. The performance of Pi-Sigma Networks is considered through well known neural network training problems. In our experiments, for the training process, we used Evolutionary Algorithms and more specifically the Differential Evolution algorithm. Preliminary results suggest that this training proc...
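The related references above repeatedly concern Pi-Sigma networks, the dual of the Sigma-Pi construction: a product of linear sums rather than a sum of products. A minimal sketch, under the assumption of a tanh output squashing (the cited abstracts do not specify the activation):

```python
import numpy as np

def pi_sigma_unit(x, W, biases):
    """Illustrative Pi-Sigma unit: the product of K linear summing
    units, yielding a K-th order polynomial in the inputs.

    x      -- input vector (length n)
    W      -- (K, n) weight matrix, one row per summing unit
    biases -- length-K bias vector
    """
    h = W @ x + biases                  # K linear sums
    return float(np.tanh(np.prod(h)))   # product, then squashing (assumed)

# Example: K = 2 summing units over a 2-input vector gives a
# second-order polynomial with only K*(n+1) adjustable weights.
y = pi_sigma_unit(np.array([2.0, 3.0]),
                  W=np.array([[1.0, 0.0], [0.0, 1.0]]),
                  biases=np.zeros(2))
```

Because the number of weights grows linearly with the order K (rather than combinatorially, as in a full Sigma-Pi expansion), Pi-Sigma networks are a common target for the evolutionary training schemes these references describe.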



Journal:
  • IEEE Transactions on Neural Networks

Volume 6, Issue 4

Pages: –

Publication date: 1995